
    Constrained LQR for Low-Precision Data Representation

    Performing computations with a low-bit number representation results in a faster implementation that uses less silicon, and hence allows an algorithm to be implemented in smaller and cheaper processors without loss of performance. We propose a novel formulation to efficiently exploit the low (or non-standard) precision number representation of some computer architectures when computing the solution to constrained LQR problems, such as those that arise in predictive control. The main idea is to include suitably defined decision variables in the quadratic program, in addition to the states and the inputs, to allow for smaller roundoff errors in the solver. This enables one to trade off the number of bits used for data representation against speed and/or hardware resources, so that smaller numerical errors can be achieved for the same number of bits (same silicon area). Because of data dependencies, the algorithm complexity, in terms of computation time and hardware resources, does not necessarily increase despite the larger number of decision variables. Examples show that a 10-fold reduction in hardware resources is possible compared to using double precision floating point, without loss of closed-loop performance.
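    For reference, the optimization underlying such predictive controllers is the sparse (non-condensed) constrained LQR quadratic program, sketched below in standard notation (weights Q, R, P, polytopic constraints, horizon N) that is assumed here for exposition rather than taken from the paper; the proposed formulation augments this QP with additional, suitably defined decision variables beyond the states and inputs:

    \begin{aligned}
    \min_{x_0,\dots,x_N,\;u_0,\dots,u_{N-1}} \quad & \tfrac{1}{2}\sum_{k=0}^{N-1}\bigl(x_k^\top Q x_k + u_k^\top R u_k\bigr) + \tfrac{1}{2}\, x_N^\top P x_N \\
    \text{subject to} \quad & x_{k+1} = A x_k + B u_k, \qquad k = 0,\dots,N-1, \\
    & x_0 = \hat{x}, \qquad F x_k + G u_k \le h .
    \end{aligned}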

    Energy-aware MPC co-design for DC-DC converters

    In this paper, we propose an integrated controller design methodology for the implementation of an energy-aware explicit model predictive control (MPC) algorithm, illustrating the method on a DC-DC converter model. The power consumption of control algorithms is becoming increasingly important for low-power embedded systems, especially where complex digital control techniques, like MPC, are used. For DC-DC converters, digital control provides better regulation, but also higher energy consumption compared to standard analog methods. To overcome the limitation in energy efficiency, instead of addressing the problem by implementing sub-optimal MPC schemes, the closed-loop performance and the control algorithm power consumption are minimized in a joint cost function, allowing us to keep the controller power efficiency closer to an analog approach while maintaining closed-loop optimality. A case study for an implementation in reconfigurable hardware shows how a designer can optimally trade closed-loop performance against hardware implementation performance.
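    As a rough illustration of the co-design idea (the symbols below are assumptions for exposition, not the paper's notation), the joint objective can be read as weighting a closed-loop performance measure against the power drawn by the controller's hardware implementation over the design parameters \theta, e.g. word length or the complexity of the explicit control law:

    \min_{\theta} \; J(\theta) \;=\; J_{\mathrm{cl}}(\theta) \;+\; \lambda \, P_{\mathrm{ctrl}}(\theta),

    with \lambda \ge 0 a designer-chosen trade-off weight; setting \lambda = 0 recovers a purely performance-driven MPC design, while a large \lambda pushes the implementation towards the energy efficiency of an analog controller.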

    Robust explicit MPC design under finite precision arithmetic

    We propose a design methodology for explicit Model Predictive Control (MPC) that guarantees hard constraint satisfaction in the presence of finite precision arithmetic errors. Complex digital control techniques, like MPC, are increasingly being adopted in embedded systems, where reduced precision computation techniques are embraced to achieve fast execution and low power consumption. However, in a low precision implementation, constraint satisfaction is not guaranteed if infinite precision is assumed during the algorithm design. To enforce constraint satisfaction under numerical errors, we use forward error analysis to compute an error bound on the output of the embedded controller. We treat this error as a state disturbance and use it to inform the design of a constraint-tightening robust controller. Benchmarks with a classical control problem, namely an inverted pendulum, show how it is possible to guarantee, by design, constraint satisfaction for embedded systems featuring low precision, fixed-point computations.
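    A standard constraint-tightening construction of the kind described above (a sketch with assumed notation, not the paper's exact bounds) treats the controller output error e_k obtained from forward error analysis, \|e_k\|_\infty \le \bar{e}, as a bounded input disturbance and shrinks the state constraint set along the prediction horizon by the reachable sets of the disturbed closed loop:

    \mathbb{X}_k \;=\; \mathbb{X} \ominus \bigoplus_{j=0}^{k-1} (A + BK)^{j} B\, \mathcal{E},
    \qquad \mathcal{E} = \{\, e : \|e\|_\infty \le \bar{e} \,\},

    where \ominus and \oplus denote the Pontryagin difference and Minkowski sum and K is a stabilizing feedback gain; a nominal trajectory that satisfies the tightened sets \mathbb{X}_k then satisfies the original constraints for every admissible realization of the finite-precision errors.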

    PolyLUT: Learning Piecewise Polynomials for Ultra-Low Latency FPGA LUT-based Inference

    Field-programmable gate arrays (FPGAs) are widely used to implement deep learning inference. Standard deep neural network inference involves the computation of interleaved linear maps and nonlinear activation functions. Prior work for ultra-low latency implementations has hardcoded the combination of linear maps and nonlinear activations inside FPGA lookup tables (LUTs). Our work is motivated by the idea that the LUTs in an FPGA can be used to implement a much greater variety of functions than this. In this paper, we propose a novel approach to training neural networks for FPGA deployment using multivariate polynomials as the basic building block. Our method takes advantage of the flexibility offered by the soft logic, hiding the polynomial evaluation inside the LUTs with zero overhead. We show that by using polynomial building blocks, we can achieve the same accuracy using considerably fewer layers of soft logic than by using linear functions, leading to significant latency and area improvements. We demonstrate the effectiveness of this approach in three tasks: network intrusion detection, jet identification at the CERN Large Hadron Collider, and handwritten digit recognition using the MNIST dataset.
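    The building block can be pictured as a sparsely connected neuron that learns a multivariate polynomial of its few inputs. The PyTorch sketch below is illustrative only (the class name and the fan-in and degree parameters are assumptions for exposition, not the authors' code) and omits the quantization and LUT tabulation needed for actual FPGA deployment:

    import itertools

    import torch
    import torch.nn as nn


    class PolyNeuron(nn.Module):
        # Illustrative polynomial neuron in the spirit of PolyLUT (not the
        # authors' implementation): a small fan-in of inputs feeds a learned
        # linear combination of all monomials up to a fixed degree, so the
        # quantized input-output map can later be tabulated in an FPGA LUT.
        def __init__(self, fan_in: int, degree: int):
            super().__init__()
            # Exponent tuples (a_1, ..., a_F) of every monomial with total degree <= D.
            self.exponents = [
                e for e in itertools.product(range(degree + 1), repeat=fan_in)
                if sum(e) <= degree
            ]
            self.weights = nn.Parameter(0.1 * torch.randn(len(self.exponents)))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x has shape (batch, fan_in): build the monomial feature vector,
            # then take its learned linear combination.
            feats = torch.stack(
                [x.pow(torch.tensor(e, dtype=x.dtype)).prod(dim=-1)
                 for e in self.exponents],
                dim=-1,
            )
            return feats @ self.weights


    # Example: one neuron with fan-in 4 and degree 2 on a random batch of 8 samples.
    neuron = PolyNeuron(fan_in=4, degree=2)
    out = neuron(torch.rand(8, 4))  # shape: (8,)

    For fan-in 4 and degree 2 each neuron learns 15 monomial coefficients, yet because the tabulated function depends only on the quantized inputs, the resulting lookup table is no larger than that of a linear neuron with the same fan-in, which is consistent with the zero-overhead claim in the abstract.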